I Built a Cryptographic Identity Layer for AI Agents. Here's Why.
A few months ago I was reading about how Perplexity got caught scraping websites by disguising their crawler as a regular Chrome browser. No identity, no declaration of intent, just a bot pretending to be a human. The websites had no way to tell the difference.
That bothered me. Not because scraping is inherently bad - automation is useful - but because there was no mechanism for a bot to say "hey, I'm a bot, here's who I am, here's what I'm doing." The web had no passport system for automated agents. And with AI agents about to flood the internet, that gap was going to get a lot more painful.
So I started building one.
1. The Problem
Right now, when any AI agent hits your API, you see a request. That's it. You don't know if it's a legitimate automation from a paying customer, a scraper, a bot farm, or someone testing your system. They all look identical at the HTTP level.
Your options are basically:
- Block all automation - hurts your legitimate users
- Allow everything - gets abused
- Use IP-based rate limiting - trivially bypassed with proxies
- Require API keys - proves nothing about intent, static permissions, leakable
- robots.txt - purely advisory, bad actors just ignore it
None of these actually solve the problem. They're all guessing. What if instead of guessing, you could verify?
"Move from guessing intent to verifying declared intent."
That's the idea behind Supelock. Every agent gets a cryptographic identity. Every request it makes is signed with that identity and declares what it intends to do. The server verifies the signature and decides what access to give - based on proof, not heuristics.
2. How it works
I built the system in four parts. Each one handles a specific layer of the problem.
- SDK - agent signs requests
- Registry - stores public keys
- Middleware - verifies + enforces
- Dashboard - monitors + controls
The SDK runs on the agent side. When an agent makes an HTTP request, the SDK builds a payload containing the actor ID, method, path, declared intent, a nonce, and expiry timestamp. It signs this payload with an Ed25519 private key that never leaves the machine. The signed token gets attached as an HTTP header.
The request now carries two extra headers: X-Supelock-Actor and X-Supelock-Intent. The server can verify both.
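To make the signing step concrete, here's a minimal sketch of what the SDK side might do, using the `cryptography` package. The header names come from the article; the payload field names, the canonical-JSON encoding, and the `base64(payload).base64(signature)` token format are my illustrative assumptions, not Supelock's actual wire format.

```python
import base64
import json
import time
import uuid

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_request(private_key: Ed25519PrivateKey, actor_id: str,
                 method: str, path: str, intent: str,
                 ttl_seconds: int = 60) -> dict:
    """Build and sign a request payload; return the extra HTTP headers."""
    payload = {
        "actor": actor_id,
        "method": method,
        "path": path,
        "intent": intent,
        "nonce": uuid.uuid4().hex,                 # one-time value, blocks replay
        "expires": int(time.time()) + ttl_seconds, # short-lived token
    }
    # Canonical JSON (sorted keys) so signer and verifier agree byte-for-byte.
    message = json.dumps(payload, sort_keys=True).encode()
    signature = private_key.sign(message)          # Ed25519, key never leaves host
    body = base64.urlsafe_b64encode(message).decode()
    sig = base64.urlsafe_b64encode(signature).decode()
    return {
        "X-Supelock-Actor": actor_id,
        "X-Supelock-Intent": f"{body}.{sig}",
    }
```

The private key stays local; only the public key ever goes to the Registry.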
The Registry is a small FastAPI service that maps actor IDs to public keys. When an agent registers for the first time, it submits its public key. When a server needs to verify a request, it looks up the key here. It supports revocation - if an agent is compromised, one DELETE call and every future request from that actor is rejected immediately.
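The register / look-up / revoke semantics boil down to something like this toy in-memory version (the real service is FastAPI + SQLite; these method names are assumptions for illustration):

```python
class Registry:
    """Toy sketch of the Registry's semantics, not the actual service."""

    def __init__(self):
        self._keys = {}       # actor_id -> public key bytes
        self._revoked = set()

    def register(self, actor_id: str, public_key: bytes) -> None:
        self._keys[actor_id] = public_key

    def lookup(self, actor_id: str):
        # A revoked actor behaves as if it never existed, so every
        # future signature verification against it fails.
        if actor_id in self._revoked:
            return None
        return self._keys.get(actor_id)

    def revoke(self, actor_id: str) -> None:
        # What the DELETE endpoint does in the real service.
        self._revoked.add(actor_id)
```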
The Middleware drops into any FastAPI application. On every incoming request it checks for Supelock headers, fetches the public key from the Registry, verifies the Ed25519 signature, checks expiry, prevents replay attacks, and validates that the signed method and path match the actual request. Then it runs a policy engine.
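The verification path can be sketched like this. I'm assuming, purely for illustration, a token of the form `base64(json payload).base64(signature)` and payload fields `expires`, `nonce`, `method`, `path` - the real wire format is defined by the SDK and middleware, and a production nonce store would need expiry and shared storage.

```python
import base64
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

seen_nonces = set()  # illustrative; production needs TTLs / shared storage

def verify_token(token: str, public_key: Ed25519PublicKey,
                 method: str, path: str) -> bool:
    """Signature, expiry, replay, and method/path checks, in that order."""
    body_b64, sig_b64 = token.split(".")
    message = base64.urlsafe_b64decode(body_b64)
    signature = base64.urlsafe_b64decode(sig_b64)
    try:
        public_key.verify(signature, message)   # Ed25519 signature check
    except InvalidSignature:
        return False
    payload = json.loads(message)
    if payload["expires"] < time.time():        # expired token
        return False
    if payload["nonce"] in seen_nonces:         # replay attempt
        return False
    seen_nonces.add(payload["nonce"])
    # The signed method and path must match the request actually made.
    return payload["method"] == method and payload["path"] == path
```

Binding the signature to method and path is what stops a captured token from being replayed against a different endpoint.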
The Dashboard is a Next.js application that shows every request hitting the middleware in real time - color-coded by trust level - and lets you edit the policy config from a browser UI without restarting anything.
3. The policy engine
Verification alone isn't enough. Knowing who an agent is doesn't tell you what they should be allowed to do. That's what the policy engine handles.
A site owner drops a supelock.yaml file in their project:
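The exact schema lives in the repo; a sketch with illustrative key names, reconstructed from the checks described below:

```yaml
# Illustrative only - these key names are assumptions, not the real schema.
tiers:
  verified:
    rate_limit: 1000          # requests per minute
    allowed_intents: [read, search]
    denied_paths: [/admin]
  anonymous:
    rate_limit: 10
```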
That's it. Verified agents get 1000 requests per minute. Anonymous traffic gets 10. The policy engine enforces four checks in order: trust gate, path rules, intent allow-list, and sliding window rate limit. First failure short-circuits and returns the appropriate HTTP error. No code changes needed.
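The short-circuit ordering of those four checks can be sketched in plain Python. Class, field, and threshold names here are my assumptions, not Supelock's actual engine, and the intent allow-list is simplified to verified agents only:

```python
import time
from collections import deque

class PolicyEngine:
    """Illustrative sketch of the four checks, evaluated in order."""

    def __init__(self, verified_rpm=1000, anonymous_rpm=10,
                 allowed_intents=("read",), blocked_paths=("/admin",)):
        self.limits = {"verified": verified_rpm, "anonymous": anonymous_rpm}
        self.allowed_intents = set(allowed_intents)
        self.blocked_paths = set(blocked_paths)
        self.windows = {}  # actor -> deque of recent request timestamps

    def check(self, actor, trust, path, intent, now=None):
        now = time.time() if now is None else now
        # 1. Trust gate: unknown trust levels are rejected outright.
        if trust not in self.limits:
            return 403
        # 2. Path rules.
        if path in self.blocked_paths:
            return 403
        # 3. Intent allow-list (simplified: enforced for verified agents).
        if trust == "verified" and intent not in self.allowed_intents:
            return 403
        # 4. Sliding-window rate limit over the last 60 seconds.
        window = self.windows.setdefault(actor, deque())
        while window and window[0] <= now - 60:
            window.popleft()
        if len(window) >= self.limits[trust]:
            return 429
        window.append(now)
        return 200
```

First failing check wins, so a blocked path never even consumes rate-limit budget.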
4. What I actually built
Four GitHub repos, all working together:
- Supelock-SDK - Ed25519 keypair generation, local key storage, intent signing, auto-registration. One pip install, three lines of code.
- Supelock-Registry - FastAPI + SQLite. Register, look up, revoke actors. 12 passing tests.
- Supelock-Middleware - Starlette middleware. Signature verification, replay protection, policy enforcement, SSE event stream. 21 passing tests.
- Supelock-Dashboard - Next.js 14. Live request feed via SSE, actor viewer, policy editor with form and raw YAML toggle, two-agent demo with comparison chart.
The policy editor is one of the parts I'm happiest with. You can add policies, change rate limits, assign actors to tiers, and hit Save - it writes the YAML file and hot-reloads the engine in under a second. No restart. Changes take effect on the very next request.
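The hot-reload idea itself is simple. One way to get it (this is a generic sketch, not how Supelock necessarily implements it) is to re-parse the policy file whenever its mtime changes, so an edit applies to the next request that consults the engine:

```python
import os

class HotReloader:
    """Re-parse a file lazily whenever its modification time changes."""

    def __init__(self, path, parse):
        self.path = path
        self.parse = parse    # e.g. yaml.safe_load in a real setup
        self._mtime = None
        self._value = None

    def get(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:              # file changed -> re-parse
            self._mtime = mtime
            with open(self.path) as f:
                self._value = self.parse(f.read())
        return self._value
```

Checking mtime per request avoids a watcher thread; the trade-off is one `stat` call on the hot path.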
5. The honest state of things
I want to be straight about where this is. Supelock is a proof of concept for a protocol, not a finished product. The cryptographic loop works. The policy engine works. The dashboard works. You can run the full stack locally in about 10 minutes.
But it has a two-sided adoption problem. For Supelock to be useful, both sides need to participate - agents need the SDK installed, and APIs need the middleware installed. Neither side has a strong reason to go first until the other side is already there.
The way I see this getting traction is one of three paths:
- A major agent framework - LangChain, CrewAI, something like that - ships Supelock support built in. Suddenly every agent using that framework has a verified identity automatically.
- One high-value API requires it. If a popular data provider says "verified Supelock agents get 10x the rate limit," developers have a concrete reason to add the SDK.
- An agent platform bundles it. A company building a product where users create AI agents could embed the SDK and give every agent a unique verifiable identity out of the box.
None of that has happened yet. But the underlying problem is real and getting more acute as AI agents become a bigger part of how software interacts with the web. The timing feels right to be building the infrastructure before everyone else builds their own incompatible version.
6. What's next
The immediate open items I'm thinking about:
- JavaScript/TypeScript SDK - Python is fine, but most agents are built in Node
- Redis-backed rate limiter - the current in-memory limiter doesn't survive restarts or scale across multiple instances
- Reputation scoring - track behavior over time and adjust trust dynamically, not just binary high/low
- PyPI release - right now you have to install from source
If any of this sounds interesting to you - whether you're building agents, running APIs, or just think the problem is worth solving - the code is all open source. Issues, PRs, and opinions are welcome.